Image super-resolution is a common task on mobile and IoT devices, where one often needs to upscale and enhance low-resolution images and video frames. While numerous solutions have been proposed for this problem in the past, they are usually not compatible with low-power mobile NPUs, which have many computational and memory constraints. In this Mobile AI challenge, we address this problem and invited the participants to design an efficient quantized image super-resolution solution that can demonstrate real-time performance on mobile NPUs. The participants were provided with the DIV2K dataset and trained INT8 models to perform high-quality 3X image upscaling. The runtime of all models was evaluated on the Synaptics VS680 Smart Home board with a dedicated edge NPU capable of accelerating quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating rates of up to 60 FPS when reconstructing Full HD resolution images. A detailed description of all models developed in the challenge is provided in this paper.
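As a rough illustration of the export path such challenge entries typically follow, the sketch below builds a toy 3X upsampling network and converts it to a fully integer-quantized TFLite model using a representative dataset for calibration. The architecture, layer sizes, and random calibration data are illustrative placeholders, not any participant's actual model.

```python
import numpy as np
import tensorflow as tf

def build_sr_model(scale=3):
    # Tiny stand-in network: conv features followed by pixel-shuffle upsampling.
    inp = tf.keras.Input(shape=(64, 64, 3))
    x = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(3 * scale * scale, 3, padding="same")(x)
    out = tf.nn.depth_to_space(x, scale)  # (64, 64, 27) -> (192, 192, 3)
    return tf.keras.Model(inp, out)

model = build_sr_model()

def representative_data():
    # Calibration samples fix the INT8 ranges; real entries would use
    # low-resolution DIV2K crops here instead of random values.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # uint8 I/O suits camera frames
converter.inference_output_type = tf.uint8
with open("sr_int8.tflite", "wb") as f:
    f.write(converter.convert())
```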
Instance segmentation in videos, which aims to segment and track multiple objects in video frames, has garnered a flurry of research attention in recent years. In this paper, we present a novel weakly supervised framework with \textbf{S}patio-\textbf{T}emporal \textbf{C}ollaboration for instance \textbf{Seg}mentation in videos, namely \textbf{STC-Seg}. Concretely, STC-Seg makes four contributions. First, we leverage the complementary representations from unsupervised depth estimation and optical flow to produce effective pseudo-labels for training deep networks and predicting high-quality instance masks. Second, to enhance mask generation, we devise a puzzle loss, which enables end-to-end training using only box-level annotations. Third, our tracking module jointly uses bounding-box diagonal points and spatio-temporal discrepancy to model movements, which largely improves robustness to varying object appearances. Finally, our framework is flexible and enables image-level instance segmentation methods to operate on the video-level task. We conduct an extensive set of experiments on the KITTI MOTS and YT-VIS datasets. Experimental results demonstrate that our method achieves strong performance and even outperforms the fully supervised TrackR-CNN and MaskTrack R-CNN. We believe that STC-Seg can be a valuable addition to the community, as it represents the tip of the iceberg of innovative opportunities in the weakly supervised paradigm for instance segmentation in videos.
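The abstract does not spell out the exact pseudo-labeling rule, so the following is only a guess at the spirit of the depth/flow fusion: inside a ground-truth box, pixels that stand out jointly in depth and motion are kept as foreground. The cue weighting, normalization, and threshold are all assumptions.

```python
import numpy as np

def fuse_pseudo_mask(depth, flow, box, tau=0.5):
    """Toy fusion of depth and optical-flow cues into a pseudo instance mask,
    restricted to a ground-truth box (y0, x0, y1, x1)."""
    y0, x0, y1, x1 = box
    d = depth[y0:y1, x0:x1]
    f = np.linalg.norm(flow[y0:y1, x0:x1], axis=-1)  # per-pixel flow magnitude

    def norm(a):  # scale a cue to [0, 1] inside the box so cues are comparable
        return (a - a.min()) / (np.ptp(a) + 1e-6)

    # Equal weighting of the two cues is an assumption, not the paper's rule.
    score = 0.5 * norm(-np.abs(d - np.median(d))) + 0.5 * norm(f)
    mask = np.zeros(depth.shape, dtype=bool)
    mask[y0:y1, x0:x1] = score > tau
    return mask
```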
Remote photoplethysmography (rPPG) enables non-contact heart rate (HR) estimation from facial videos, offering significant convenience compared with traditional contact-based measurements. In real-world long-term health monitoring scenarios, the distance between the camera and the participants, as well as the participants' head movements, usually vary over time, resulting in inaccurate rPPG measurements due to varying face resolution and complex motion artifacts. Unlike previous rPPG models designed for a constant distance between the camera and participants, in this paper we propose two plug-and-play blocks, a physiological signal feature extraction block (PFE) and a temporal face alignment block (TFA), to alleviate the degradation caused by changing distance and head motion. On the one hand, guided by representative-area information, PFE adaptively encodes facial frames of arbitrary resolution into fixed-resolution facial structure features. On the other hand, leveraging the estimated optical flow, TFA is able to counteract the rPPG signal confusion caused by head movement, thus benefiting motion-robust rPPG signal recovery. Besides, we train the model with a cross-resolution constraint using a two-stream dual-resolution framework, which further helps PFE learn resolution-robust facial rPPG features. Extensive experiments on three benchmark datasets (UBFC-rPPG, COHFACE, and PURE) demonstrate the superior performance of the proposed method. One highlight is that, with PFE and TFA, off-the-shelf spatio-temporal rPPG models can predict more robust rPPG signals under both varying face resolution and severe head movement. The code is available at https://github.com/LJW-GIT/Arbitrary_Resolution_rPPG.
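A minimal sketch of the PFE-style idea, assuming a simple convolutional encoder followed by adaptive pooling so that face crops of any resolution map to fixed-size structure features; the layer sizes and output resolution are illustrative, not the paper's design.

```python
import torch
import torch.nn as nn

class PFEBlock(nn.Module):
    """Resolution-agnostic feature extraction: arbitrary-size facial frames
    are encoded and then pooled to a fixed spatial size."""
    def __init__(self, in_ch=3, feat_ch=64, out_hw=(32, 32)):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Adaptive pooling yields the same spatial size for any input
        # resolution, which is what makes the block plug-and-play here.
        self.pool = nn.AdaptiveAvgPool2d(out_hw)

    def forward(self, frames):  # frames: (B, C, H, W) with arbitrary H, W
        return self.pool(self.encode(frames))

# Both a 64x64 and a 200x180 face crop map to (B, 64, 32, 32) features.
pfe = PFEBlock()
print(pfe(torch.rand(2, 3, 64, 64)).shape, pfe(torch.rand(2, 3, 200, 180)).shape)
```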
Recent advances in neural approaches have greatly improved task-oriented dialogue (TOD) systems, which assist users in accomplishing their goals. However, such systems rely on costly, manually labeled dialogs that are not available in practical scenarios. In this paper, we present our models for Track 2 of the SereTOD 2022 challenge, the first challenge on building semi-supervised and reinforced TOD systems on MobileCS, a large-scale real-world Chinese TOD dataset. We build a knowledge-grounded dialog model that takes the dialog history and the local KB as input and predicts the system response, and we perform semi-supervised pre-training on both the labeled and unlabeled data. Our system achieves first place in both the automatic evaluation and human interaction, with notably higher BLEU (+7.64) and Success (+13.6\%) than the second place.
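A toy sketch of how the dialog history and a local KB might be flattened into a single input sequence for such a knowledge-grounded model; the special tokens and field layout are hypothetical, not the actual format used by this SereTOD entry.

```python
def build_model_input(dialog_history, local_kb, user_query):
    """Flatten (user, system) turns and KB slot-value pairs into one sequence
    that a response generator can condition on."""
    history = " ".join(f"[USER] {u} [SYSTEM] {s}" for u, s in dialog_history)
    kb = " ".join(f"[KB] {slot} = {value}" for slot, value in local_kb.items())
    return f"{history} {kb} [USER] {user_query} [SYSTEM]"

example = build_model_input(
    dialog_history=[("I want to check my data plan.", "Sure, one moment.")],
    local_kb={"plan_name": "20GB monthly", "balance": "12.4GB"},
    user_query="How much data do I have left?",
)
print(example)
```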
We introduce a new approach to image forensics: placing physical refractive objects, which we call totems, into a scene to protect any photograph taken of that scene. Totems bend and redirect light rays, thus providing multiple (albeit distorted) views of the scene within a single image. A defender can use these distorted totem pixels to detect whether an image has been manipulated. Our approach unscrambles the light rays passing through the totems by estimating their positions in the scene and using their known geometric and material properties. To verify a totem-protected image, we detect inconsistencies between the scene reconstructed from the totem viewpoints and the scene's appearance from the camera viewpoint. Such an approach makes adversarial manipulation more difficult, as the adversary must modify both the totem and image pixels in a geometrically consistent manner without knowledge of the totem's physical properties. Unlike prior learning-based methods, our approach does not require training on datasets of specific manipulations, and instead uses the physical properties of the scene and camera to solve the forensics problem.
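For intuition only, the toy check below compares the camera image against a scene reconstruction obtained from the totem viewpoints, patch by patch, and flags regions with a large mismatch. The metric, patch size, and threshold are assumptions; the actual verification pipeline physically unscrambles the totem's refracted rays rather than differencing aligned images.

```python
import numpy as np

def flag_manipulation(camera_view, totem_reconstruction, patch=16, tau=0.1):
    """Return coordinates of patches where the camera view disagrees with the
    totem-based reconstruction (both assumed aligned, float arrays in [0, 1])."""
    h, w = camera_view.shape[:2]
    suspicious = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            a = camera_view[y:y + patch, x:x + patch]
            b = totem_reconstruction[y:y + patch, x:x + patch]
            if np.mean((a - b) ** 2) > tau:  # large mismatch: possibly edited
                suspicious.append((y, x))
    return suspicious
```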
Traditional intent classification models are based on a predefined intent set and can only recognize a limited number of in-domain (IND) intent classes. However, users may input out-of-domain (OOD) queries in a practical dialogue system, and such OOD queries can point toward directions for future improvement. In this paper, we define a new task, Generalized Intent Discovery (GID), which aims to extend an IND intent classifier to an open-world intent set that includes both IND and OOD intents. We aim to simultaneously classify a set of labeled IND intent classes while discovering and recognizing new, unlabeled OOD types. We construct three public datasets for different application scenarios and propose two kinds of frameworks, pipeline-based and end-to-end, for future work. In addition, we conduct exhaustive experiments and qualitative analysis to understand the key challenges and provide new guidance for future GID research.
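A plausible sketch of the pipeline-style framework, assuming a sentence encoder and an IND classifier with a scikit-learn-like interface: confident queries are routed to the known-intent classifier, and low-confidence queries are clustered into pseudo OOD intents. The confidence threshold and the choice of K-means are illustrative, not the paper's specification.

```python
import numpy as np
from sklearn.cluster import KMeans

def pipeline_gid(ind_clf, encoder, unlabeled_texts, n_new_intents, tau=0.7):
    """`encoder` maps texts to (N, D) embeddings; `ind_clf` is a fitted
    classifier exposing predict_proba. Both interfaces are assumed."""
    feats = encoder(unlabeled_texts)
    conf = ind_clf.predict_proba(feats).max(axis=1)

    ind_idx = np.where(conf >= tau)[0]   # confidently in-domain queries
    ood_idx = np.where(conf < tau)[0]    # candidates for new intents

    # Cluster low-confidence queries into pseudo OOD intents; these pseudo
    # labels can then be used to extend the classifier's label set.
    ood_pseudo = KMeans(n_clusters=n_new_intents).fit_predict(feats[ood_idx])
    return ind_idx, ood_idx, ood_pseudo
```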
One of the challenging input settings for visual servoing is when the initial and goal camera views are far apart. Such settings are difficult because the wide baseline causes drastic changes in object appearance and introduces occlusions. This paper presents a novel self-supervised visual servoing method for wide-baseline images that does not require 3D ground-truth supervision. Existing methods that regress the absolute camera pose with respect to an object require 3D ground-truth data for the object in the form of 3D bounding boxes or meshes. We instead learn a coherent visual representation by leveraging a geometric property called 3D equivariance: the representation transforms in a predictable way as a function of 3D transformations. To ensure that the feature space is faithful to the underlying geodesic space, a geodesic-preserving constraint is coupled with the equivariance. We design a Siamese network that can effectively enforce these two geometric properties without 3D supervision. With the learned model, the relative transformation can be inferred simply by following the gradient in the learned space and used as feedback for closed-loop visual servoing. Our method is evaluated on objects from the YCB dataset and shows meaningful outperformance on visual servoing and object alignment tasks compared with state-of-the-art methods that use 3D supervision, reducing the average distance error by more than 35% and achieving a success rate above 90% within the error tolerance.
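A hedged sketch of the gradient-following control step, assuming the current view's embedding is differentiable with respect to the 6-DoF pose (e.g., computed from a rendered or warped view so the autograd graph connects them); the loss choice and learning rate are illustrative.

```python
import torch
import torch.nn.functional as F

def servo_step(f_current, f_goal, pose, lr=0.05):
    """One gradient step in the learned equivariant feature space: `pose` is
    a 6-DoF tensor with requires_grad=True, and `f_current` must have been
    computed from `pose` so that the gradient below is defined."""
    loss = F.mse_loss(f_current, f_goal)
    grad, = torch.autograd.grad(loss, pose)
    with torch.no_grad():
        pose -= lr * grad  # the update serves as the closed-loop feedback
    return pose, loss.item()
```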
Recently, structural bias has been exploited for aspect sentiment triplet extraction (ASTE) and has improved performance. On the other hand, it is recognized that explicitly incorporating structural bias has a negative impact on efficiency, while pre-trained language models (PLMs) can already capture implicit structure. A natural question thus arises: is structural bias still necessary in the context of PLMs? To answer this question, we propose to address the efficiency issue by integrating structural bias into the PLM with an adapter and by using a cheap-to-compute relative-position structure in place of the syntactic dependency structure. Benchmark evaluation is conducted on the SemEval datasets. The results show that our proposed structural adapter is beneficial to PLMs and achieves state-of-the-art performance over a range of strong baselines, with a light parameter demand and low latency. Meanwhile, we raise the concern that the current evaluation default of small-scale data is insufficient, and we therefore release a large-scale dataset for ASTE. Results on the new dataset suggest that the structural adapter remains effective and efficient at scale. Overall, we conclude that structural bias is still necessary even with PLMs.
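A sketch of what a relative-position structural adapter might look like, inserted as a residual bottleneck on top of a frozen PLM layer; the bucketing, mixing rule, and dimensions are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class RelPosStructuralAdapter(nn.Module):
    """Residual bottleneck adapter that mixes token states with a cheap
    relative-position bias, standing in for syntactic dependency edges."""
    def __init__(self, hidden=768, bottleneck=64, max_rel_dist=16):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        # Clipped relative distances index a small embedding table.
        self.rel_emb = nn.Embedding(2 * max_rel_dist + 1, bottleneck)
        self.max_rel_dist = max_rel_dist

    def forward(self, hidden_states):             # (B, T, H) from the PLM
        B, T, _ = hidden_states.shape
        pos = torch.arange(T, device=hidden_states.device)
        rel = (pos[None, :] - pos[:, None]).clamp(
            -self.max_rel_dist, self.max_rel_dist) + self.max_rel_dist
        x = self.down(hidden_states)               # (B, T, b)
        bias = self.rel_emb(rel).mean(-1)          # (T, T) scalar bias
        attn = torch.softmax(bias, dim=-1)
        x = attn @ x                               # structure-aware mixing
        return hidden_states + self.up(x)          # residual adapter output
```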
Machine learning with quantum convolutional neural networks (QCNNs) has been successful in both quantum and classical data classification. In previous studies, QCNNs achieved higher classification accuracy than their classical counterparts under the same training conditions in the few-parameter regime. However, because of the limited size of quantum circuits that can be reliably implemented in the near future, it is difficult to examine the general performance of large-scale quantum models. We propose transfer learning as an effective strategy for making the most of small QCNNs in the noisy intermediate-scale quantum era. In the classical-to-quantum transfer learning framework, a QCNN can solve complex classification problems without a large-scale quantum circuit by using a pre-trained classical convolutional neural network (CNN). We perform numerical simulations of QCNN models with various quantum convolution and pooling operations for MNIST data classification under transfer learning, where the classical CNN is trained on Fashion-MNIST data. The results show that, under similar training conditions, classical-to-quantum transfer learning performs considerably better than purely classical transfer learning models.
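A minimal sketch of the classical-to-quantum transfer setup using PennyLane's Torch integration, assuming a frozen pre-trained CNN feeds a small trainable variational circuit; the generic entangling ansatz stands in for the paper's specific quantum convolution and pooling operations, and the backbone's `out_dim` attribute is assumed.

```python
import pennylane as qml
import torch
import torch.nn as nn

n_qubits, n_layers = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch")
def qcnn_circuit(inputs, weights):
    # Encode CNN features as rotation angles, then apply a generic
    # entangling ansatz in place of the paper's convolution/pooling choices.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

quantum_head = qml.qnn.TorchLayer(qcnn_circuit,
                                  {"weights": (n_layers, n_qubits, 3)})

class HybridNet(nn.Module):
    """Frozen pre-trained classical CNN followed by a small trainable
    quantum head; `backbone.out_dim` is an assumed attribute."""
    def __init__(self, backbone, n_classes=10):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # classical-to-quantum transfer
        self.project = nn.Linear(backbone.out_dim, n_qubits)
        self.classify = nn.Linear(n_qubits, n_classes)

    def forward(self, x):
        feats = self.backbone(x)                   # pre-trained features
        angles = torch.tanh(self.project(feats))   # bounded embedding angles
        return self.classify(quantum_head(angles))
```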
Dialogue bots have been widely applied in customer service scenarios to provide a timely and user-friendly experience. These bots must classify the appropriate domain of a dialogue, understand the user's intent, and generate proper responses. Existing dialogue pre-training models are designed only for a few dialogue tasks and ignore the weakly supervised expert knowledge in customer service dialogues. In this paper, we propose UFA (\textbf{U}nified Model \textbf{F}or \textbf{A}ll tasks), a novel unified knowledge-prompt pre-training framework for customer service dialogues. We formulate all tasks of customer service dialogues as a unified text-to-text generation task and introduce a knowledge-driven prompt strategy to learn jointly from the distinct dialogue tasks. We pre-train UFA on a large-scale Chinese customer service corpus collected from practical scenarios and obtain significant improvements on both natural language understanding (NLU) and natural language generation (NLG) benchmarks.
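A toy illustration of casting distinct customer service dialogue tasks into one text-to-text format with knowledge-driven prompts; the prompt wording and field layout are guesses, not UFA's actual templates.

```python
def to_text2text(task, dialog, knowledge=""):
    """Map a (task, dialog, knowledge) triple to a single source string for a
    text-to-text model; target strings would be labels or responses."""
    prompts = {
        "domain":   "classify the domain of this dialog:",
        "intent":   "identify the user intent:",
        "response": "generate the next system response:",
    }
    return f"{prompts[task]} knowledge: {knowledge} dialog: {dialog}"

print(to_text2text("intent", "user: my phone bill looks wrong this month"))
```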